Second-order linear ordinary differential equations (ODEs) are equations of the form
\[
a(x)\, y'' + b(x)\, y' + c(x)\, y = f(x),
\]
where $a(x)$, $b(x)$, $c(x)$, and $f(x)$ are given functions of the independent variable $x$, and $y(x)$ is the unknown function we want to solve for.
These are ubiquitous in various applications. A damped, driven harmonic oscillator, for instance, is described by a second-order linear ODE of this form.
In this section, we will explore methods for solving second-order linear ODEs, including the method of undetermined coefficients and variation of parameters.
We first consider the homogeneous case, where $f(x) = 0$, with constant coefficients, i.e., where $a$, $b$, and $c$ are constants.
The general form of such an equation is
\[
a y'' + b y' + c y = 0.
\]
We make the ansatz $y = e^{\lambda x}$, where $\lambda$ is a constant to be determined.
There are a few justifications for this choice.
If we treat the left-hand side as a linear operator acting on $y$, i.e.,
\[
L[y] = a \frac{d^2 y}{dx^2} + b \frac{dy}{dx} + c y,
\]
then we note that exponentials are eigenfunctions of the differentiation operator $\frac{d}{dx}$: applying it to $e^{\lambda x}$ returns the same function multiplied by the constant $\lambda$ (the eigenvalue). Consequently, $L[e^{\lambda x}] = (a\lambda^2 + b\lambda + c)\, e^{\lambda x}$.
This property makes exponentials a natural choice for solving linear differential equations.
Another line of reasoning comes from Laplace transforms: the transform turns derivatives into polynomials in the transform variable $s$, and inverting the resulting rational function naturally produces exponentials $e^{\lambda x}$.
In any case, plugging our ansatz into the ODE and dividing through by $e^{\lambda x}$ gives the characteristic equation
\[
a \lambda^2 + b \lambda + c = 0.
\]
The nature of the roots of this quadratic equation determines the form of the general solution to the ODE:
Distinct real roots ($\lambda_1 \neq \lambda_2$): $y = C_1 e^{\lambda_1 x} + C_2 e^{\lambda_2 x}$,
Repeated real root ($\lambda_1 = \lambda_2 = \lambda$): $y = (C_1 + C_2 x)\, e^{\lambda x}$, and
Complex conjugate roots ($\lambda = \alpha \pm i\beta$): $y = e^{\alpha x} \left( C_1 \cos \beta x + C_2 \sin \beta x \right)$.
We can see that the general solution is a linear combination of two linearly independent solutions, as expected for a second-order ODE.
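As a quick sanity check, the three cases above can be reproduced with a computer algebra system. A minimal sketch using sympy; the three specific ODEs below are illustrative choices, not taken from the text:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Distinct real roots: y'' - 3y' + 2y = 0 has lambda = 1 and lambda = 2,
# so the general solution is C1*exp(x) + C2*exp(2*x).
ode1 = y(x).diff(x, 2) - 3*y(x).diff(x) + 2*y(x)
sol1 = sp.dsolve(ode1, y(x))

# Repeated real root: y'' - 2y' + y = 0 has lambda = 1 (twice),
# giving (C1 + C2*x)*exp(x).
ode2 = y(x).diff(x, 2) - 2*y(x).diff(x) + y(x)
sol2 = sp.dsolve(ode2, y(x))

# Complex conjugate roots: y'' + 2y' + 5y = 0 has lambda = -1 +/- 2i,
# giving exp(-x)*(C1*cos(2x) + C2*sin(2x)).
ode3 = y(x).diff(x, 2) + 2*y(x).diff(x) + 5*y(x)
sol3 = sp.dsolve(ode3, y(x))

for sol in (sol1, sol2, sol3):
    print(sol)
```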
To see whether two solutions are linearly independent, we recall that linear independence means that no non-trivial linear combination of the two solutions can yield the zero function.
This means that if we have two solutions $y_1(x)$ and $y_2(x)$, they are linearly independent if the only solution to the equation
\[
c_1 y_1(x) + c_2 y_2(x) = 0
\]
is $c_1 = c_2 = 0$. Differentiating this gives
\[
c_1 y_1'(x) + c_2 y_2'(x) = 0.
\]
Combining these two equations, we can write the system in matrix form:
\[
\begin{pmatrix} y_1 & y_2 \\ y_1' & y_2' \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\]
If and are linearly independent, the only solution is the trivial one, which means that the determinant of the coefficient matrix must be non-zero.
This determinant is known as the Wronskian:
\[
W(x) = y_1 y_2' - y_2 y_1'.
\]
If $W(x) \neq 0$ for some value of $x$, then $y_1$ and $y_2$ are linearly independent.
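For instance, for the pair $e^{\lambda_1 x}$ and $e^{\lambda_2 x}$ the Wronskian can be computed symbolically; a short sketch with sympy:

```python
import sympy as sp

x, l1, l2 = sp.symbols('x lambda_1 lambda_2')
y1, y2 = sp.exp(l1*x), sp.exp(l2*x)

# W = y1*y2' - y2*y1' = (lambda_2 - lambda_1)*exp((lambda_1 + lambda_2)*x),
# which vanishes identically only when the two roots coincide.
W = sp.simplify(y1*sp.diff(y2, x) - y2*sp.diff(y1, x))
print(W)
```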
Next, we consider the non-homogeneous case, where $f(x) \neq 0$. If we divide the entire equation by $a(x)$, we can write the ODE in standard form:
\[
y'' + p(x)\, y' + q(x)\, y = r(x),
\]
where $p(x) = b(x)/a(x)$, $q(x) = c(x)/a(x)$, and $r(x) = f(x)/a(x)$.
To solve this equation, we define its corresponding homogeneous equation by setting $r(x) = 0$:
\[
y'' + p(x)\, y' + q(x)\, y = 0.
\]
Notice that if we have a complementary solution $y_c$ that solves the homogeneous equation and a particular solution $y_p$ that solves the non-homogeneous equation, then their sum $y = y_c + y_p$ also solves the non-homogeneous equation.
This means that to get a general solution to the non-homogeneous equation, we can find the general solution to the homogeneous equation and add it to a particular solution of the non-homogeneous equation.
To find the particular solution $y_p$, we can use methods such as undetermined coefficients or variation of parameters.
In the method of undetermined coefficients, we make an ansatz for the particular solution $y_p$ based on the form of $r(x)$ (typically using a table of common forms), and then determine the coefficients by substituting $y_p$ into the non-homogeneous equation.
The ansatz must be linearly independent of the homogeneous solution $y_c$; if it is not, we multiply the ansatz by $x$ until it is.
In the method of variation of parameters, we use the homogeneous solution to construct a particular solution $y_p$.
Suppose the complementary functions are $y_1(x)$ and $y_2(x)$, so that the general solution to the homogeneous equation is $y_c = C_1 y_1 + C_2 y_2$. We then seek a particular solution of the form $y_p = u_1(x)\, y_1(x) + u_2(x)\, y_2(x)$, where $u_1$ and $u_2$ are functions to be determined.
Then, we use the Wronskian defined earlier,
\[
W(x) = y_1 y_2' - y_2 y_1'.
\]
Then, we can find a particular solution using the formulas
\[
u_1(x) = -\int \frac{y_2(x)\, r(x)}{W(x)}\, dx, \qquad
u_2(x) = \int \frac{y_1(x)\, r(x)}{W(x)}\, dx.
\]
The proof of this method involves substituting $y_p = u_1 y_1 + u_2 y_2$ into the non-homogeneous equation and using the properties of the Wronskian to simplify the resulting expressions.
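A sketch of the procedure in sympy, using $y'' + y = \sec x$ as an illustrative inhomogeneous term (an assumption chosen here because $\sec x$ has no entry in the usual ansatz table):

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.cos(x), sp.sin(x)     # complementary functions of y'' + y = 0
r = 1/sp.cos(x)                   # inhomogeneous term sec(x), standard form

W = sp.simplify(y1*sp.diff(y2, x) - y2*sp.diff(y1, x))  # Wronskian; equals 1
u1 = sp.integrate(-y2*r/W, x)     # -> log(cos(x))
u2 = sp.integrate(y1*r/W, x)      # -> x
yp = sp.simplify(u1*y1 + u2*y2)

# The residual y_p'' + y_p - sec(x) should vanish
residual = sp.simplify(sp.diff(yp, x, 2) + yp - r)
print(yp, residual)
```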
Example: Consider the non-homogeneous equation
\[
y'' + y = \tan x.
\]
As $\tan x$ is not in the table of common forms, we will use the method of variation of parameters.
The corresponding homogeneous equation is
\[
y'' + y = 0,
\]
whose characteristic equation is
\[
\lambda^2 + 1 = 0,
\]
with roots $\lambda = \pm i$. Thus, the complementary functions are
\[
y_1 = \cos x, \qquad y_2 = \sin x.
\]
The Wronskian is
\[
W = \cos^2 x + \sin^2 x = 1.
\]
Our particular solution is then given by
\[
u_1 = -\int \sin x \tan x \, dx = \sin x - \ln\lvert \sec x + \tan x \rvert, \qquad
u_2 = \int \cos x \tan x \, dx = -\cos x,
\]
so that
\[
y_p = u_1 y_1 + u_2 y_2 = -\cos x \, \ln\lvert \sec x + \tan x \rvert.
\]
Thus, the general solution to the non-homogeneous equation is
\[
y = C_1 \cos x + C_2 \sin x - \cos x \, \ln\lvert \sec x + \tan x \rvert.
\]
Example: Consider the non-homogeneous equation
\[
a y'' + b y' + c y = e^{i\omega x}.
\]
This is an equation where the inhomogeneous term is an oscillatory function, perhaps modeling a driven system with a complex exponential driving force.
The corresponding homogeneous equation is
\[
a y'' + b y' + c y = 0,
\]
whose characteristic equation is
\[
a \lambda^2 + b \lambda + c = 0,
\]
with roots $\lambda_1$ and $\lambda_2$ (assumed distinct here). Thus, the complementary functions are
\[
y_1 = e^{\lambda_1 x}, \qquad y_2 = e^{\lambda_2 x}.
\]
Next, we choose a trial function for the particular solution based on the form of the inhomogeneous term.
Since $e^{i\omega x}$ is of the form $e^{\alpha x}$ with $\alpha = i\omega$, we try
\[
y_p = A e^{i\omega x}.
\]
Plugging this into the non-homogeneous equation gives
\[
\left( -a\omega^2 + i b \omega + c \right) A e^{i\omega x} = e^{i\omega x}.
\]
Simplifying, we have
\[
A = \frac{1}{c - a\omega^2 + i b \omega}.
\]
Thus, the general solution to the non-homogeneous equation is
\[
y = C_1 e^{\lambda_1 x} + C_2 e^{\lambda_2 x} + \frac{e^{i\omega x}}{c - a\omega^2 + i b \omega}.
\]
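The algebra of determining the coefficient of an exponential trial function can be checked symbolically. A sketch with sympy, keeping $a$, $b$, $c$, and $\omega$ symbolic and assuming a driving term $e^{i\omega x}$:

```python
import sympy as sp

x, a, b, c, w, A = sp.symbols('x a b c omega A')
yp = A*sp.exp(sp.I*w*x)

# Substitute the ansatz y_p = A*exp(i*omega*x) into a*y'' + b*y' + c*y
lhs = a*sp.diff(yp, x, 2) + b*sp.diff(yp, x) + c*yp
Aval = sp.solve(sp.Eq(lhs, sp.exp(sp.I*w*x)), A)[0]

# Mathematically, A = 1/(c - a*omega**2 + i*b*omega)
print(sp.simplify(Aval))
```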
Cauchy-Euler equations are second-order ODEs in which the coefficient of each derivative is a power of the independent variable matching the order of that derivative. They have the general form
\[
a x^2 y'' + b x y' + c y = 0,
\]
where $a$, $b$, and $c$ are constants.
We will see this form arise in problems with spherical or cylindrical symmetry (e.g., Laplace's equation in spherical coordinates).
To solve a Cauchy-Euler equation, we make the ansatz $y = x^r$, where $r$ is a constant to be determined.
Its derivatives are
\[
y' = r x^{r-1}, \qquad y'' = r(r-1)\, x^{r-2}.
\]
Plugging these into the ODE and dividing through by $x^r$ gives the auxiliary equation
\[
a r(r-1) + b r + c = 0.
\]
Once again, the nature of the roots of this quadratic equation determines the form of the general solution to the ODE:
Distinct real roots ($r_1 \neq r_2$): $y = C_1 x^{r_1} + C_2 x^{r_2}$,
Repeated real root ($r_1 = r_2 = r$): $y = (C_1 + C_2 \ln x)\, x^r$, and
Complex conjugate roots ($r = \alpha \pm i\beta$): $y = x^{\alpha} \left( C_1 \cos(\beta \ln x) + C_2 \sin(\beta \ln x) \right)$.
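As a quick check, the ansatz can be carried out with sympy; the equation $x^2 y'' + x y' - y = 0$ below is an illustrative choice:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
r = sp.symbols('r')

# Plug y = x**r into x**2*y'' + x*y' - y and divide out x**r
y = x**r
expr = x**2*sp.diff(y, x, 2) + x*sp.diff(y, x) - y
aux = sp.simplify(expr/x**r)      # auxiliary equation r*(r-1) + r - 1

# Mathematically the roots are 1 and -1, so y = C1*x + C2/x
roots = sp.solve(aux, r)
print(roots)
```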
Sometimes, a second-order ODE may be nonlinear and hence more difficult to solve. Generally, such equations have the form
\[
f(x, y, y', y'') = 0.
\]
However, if some conditions are met, we can reduce the order of the equation.
If the equation does not explicitly depend on $y$, i.e.,
\[
f(x, y', y'') = 0,
\]
we can make the substitution $p = y'$, which gives $p' = y''$. Then, we can rewrite the equation as
\[
f(x, p, p') = 0.
\]
This is now a first-order ODE in terms of $p$ and $x$, which we can attempt to solve using methods for first-order ODEs. You have probably seen this used extensively in mechanics problems.
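A sketch of this substitution in sympy, using the illustrative equation $y'' + (y')^2 = 0$, which has no explicit $y$:

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Function('p')

# With p = y', the equation y'' + (y')**2 = 0 becomes p' + p**2 = 0,
# a separable first-order ODE with solution p = 1/(x + C1);
# integrating once more gives y = log(x + C1) + C2.
ode = p(x).diff(x) + p(x)**2
psol = sp.dsolve(ode, p(x))
print(psol)
```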
Next, if the equation does not explicitly depend on $x$, i.e.,
\[
f(y, y', y'') = 0,
\]
we can again make the substitution $p = y'$, which, by the chain rule, gives
\[
y'' = \frac{dp}{dx} = \frac{dp}{dy}\frac{dy}{dx} = p\,\frac{dp}{dy}.
\]
Then, we can rewrite the equation as
\[
f\!\left(y, p, p\,\frac{dp}{dy}\right) = 0.
\]
This is now a first-order ODE in terms of $p$ and $y$, which we can attempt to solve using methods for first-order ODEs.
Second-order ODEs arise frequently in physics, especially in classical mechanics. There are many situations where special functions arise as solutions to second-order ODEs.
In systems with cylindrical symmetry, such as heat conduction in a cylindrical rod or wave propagation in a circular membrane, Bessel's equation often appears. The standard form of Bessel's equation is
\[
x^2 y'' + x y' + (x^2 - \nu^2)\, y = 0,
\]
where $\nu$ is a constant (the order of the Bessel function). The solutions to this equation are the Bessel functions of the first and second kind, denoted by $J_\nu(x)$ and $Y_\nu(x)$, respectively. The first kind is finite at the origin, while the second kind diverges there.
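These properties are easy to check numerically with scipy's Bessel functions; a short sketch:

```python
import numpy as np
from scipy.special import jv, yn, jvp

# J_0 is finite at the origin (J_0(0) = 1); Y_0 diverges there.
print(jv(0, 0.0))      # 1.0
print(yn(0, 1e-8))     # large and negative

# J_v satisfies x^2 y'' + x y' + (x^2 - v^2) y = 0 (checked at one point;
# the order v = 2 and point x = 1.7 are arbitrary illustrative choices).
v, x = 2.0, 1.7
residual = x**2*jvp(v, x, 2) + x*jvp(v, x, 1) + (x**2 - v**2)*jv(v, x)
print(abs(residual))   # ~0 up to rounding
```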
In systems with spherical symmetry, such as gravitational and electrostatic potentials, Legendre's equation often appears. The standard form of Legendre's equation is
\[
(1 - x^2)\, y'' - 2x y' + \ell(\ell + 1)\, y = 0,
\]
where $\ell$ is a non-negative integer. The solutions to this equation are the Legendre polynomials, denoted by $P_\ell(x)$. These polynomials are orthogonal over the interval $[-1, 1]$.
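The orthogonality can be verified with numpy's Legendre tools; for example, $\int_{-1}^{1} P_2 P_3 \, dx = 0$, while $\int_{-1}^{1} P_2^2 \, dx = 2/(2 \cdot 2 + 1) = 0.4$:

```python
import numpy as np
from numpy.polynomial import legendre

P2 = legendre.Legendre.basis(2)
P3 = legendre.Legendre.basis(3)

def inner(p, q):
    """Integral of p*q over [-1, 1] via the exact polynomial antiderivative."""
    antideriv = (p*q).integ()
    return antideriv(1.0) - antideriv(-1.0)

print(inner(P2, P3))   # 0 (orthogonal)
print(inner(P2, P2))   # 2/5 = 0.4
```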
In the quantum mechanics of the hydrogen atom, the radial part of the Schrödinger equation leads to the associated Laguerre equation. The standard form of the associated Laguerre equation is
\[
x y'' + (k + 1 - x)\, y' + n y = 0,
\]
where $n$ is a non-negative integer and $k$ is a parameter related to the angular momentum quantum number. The solutions to this equation are the associated Laguerre polynomials, denoted by $L_n^k(x)$.
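scipy provides these polynomials as `genlaguerre`; a sketch verifying numerically that $L_n^k$ satisfies the equation above, with $n = 3$ and $k = 2$ as arbitrary illustrative choices:

```python
import numpy as np
from scipy.special import genlaguerre

n, k = 3, 2
Lnk = genlaguerre(n, k)          # a numpy polynomial object
d1, d2 = Lnk.deriv(), Lnk.deriv(2)

# Residual of x*y'' + (k + 1 - x)*y' + n*y at several sample points
x = np.linspace(0.1, 5.0, 7)
residual = x*d2(x) + (k + 1 - x)*d1(x) + n*Lnk(x)
print(np.max(np.abs(residual)))  # ~0 up to rounding
```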
When the coefficients of a second-order linear ODE are not constant, many of the methods discussed earlier may not be applicable. In such cases, we can attempt to find a solution in the form of a power series expansion about a point $x_0$.
Some terminology: a point $x_0$ is an ordinary point of the ODE if the functions $p(x)$ and $q(x)$ (from the standard form) are analytic (i.e., can be expressed as convergent power series) at $x_0$. If $x_0$ is not an ordinary point, it is a singular point. A singular point is regular if both $(x - x_0)\, p(x)$ and $(x - x_0)^2\, q(x)$ are analytic at $x_0$; otherwise, it is an irregular singular point.
This method is used to find power-series solutions about ordinary points.
We will just consider the case where $x_0 = 0$ for simplicity; the method can easily be generalized to other points. We assume a solution of the form
\[
y = \sum_{n=0}^{\infty} a_n x^n,
\]
where the $a_n$ are coefficients to be determined.
Its derivatives are
\[
y' = \sum_{n=1}^{\infty} n a_n x^{n-1}, \qquad
y'' = \sum_{n=2}^{\infty} n(n-1)\, a_n x^{n-2}.
\]
Next, we plug these into the ODE and shift the indices of the sums so that all terms are expressed as powers of $x$ with the same exponent.
Then, we equate the coefficient of each power of $x$ to zero, yielding a recurrence relation for the coefficients $a_n$.
By solving this recurrence relation, we can find the coefficients and thus construct the power-series solution to the ODE.
Example: The Airy equation is given by
\[
y'' - x y = 0.
\]
Using a power series expansion about the ordinary point $x = 0$, derive the recurrence relation for the coefficients of the series solution.
We assume a solution of the form
\[
y = \sum_{n=0}^{\infty} a_n x^n.
\]
Then plugging this into the Airy equation gives
\[
\sum_{n=2}^{\infty} n(n-1)\, a_n x^{n-2} - x \sum_{n=0}^{\infty} a_n x^n = 0,
\]
or equivalently
\[
\sum_{n=2}^{\infty} n(n-1)\, a_n x^{n-2} - \sum_{n=0}^{\infty} a_n x^{n+1} = 0.
\]
In the first sum, we shift the index by letting $m = n - 2$ (so that $n = m + 2$), and in the second sum we let $m = n + 1$ (so that $n = m - 1$), giving
\[
\sum_{m=0}^{\infty} (m+2)(m+1)\, a_{m+2} x^m - \sum_{m=1}^{\infty} a_{m-1} x^m = 0.
\]
Now the powers of $x$ match. We pull out the $m = 0$ term of the first sum so that both sums start from $m = 1$:
\[
2 a_2 + \sum_{m=1}^{\infty} \left[ (m+2)(m+1)\, a_{m+2} - a_{m-1} \right] x^m = 0.
\]
This gives us $a_2 = 0$ and $(m+2)(m+1)\, a_{m+2} = a_{m-1}$ for $m \geq 1$, leading to the recurrence relation
\[
a_{n+3} = \frac{a_n}{(n+3)(n+2)}, \qquad n \geq 0.
\]
This relation allows us to compute all coefficients in terms of the initial coefficients $a_0$, $a_1$, and $a_2$ (with $a_2 = 0$). Recall that since $y$ is analytic at $x = 0$, it must be equal to its Taylor series expansion there, so $a_n = y^{(n)}(0)/n!$. This tells us that $a_0 = y(0)$ and $a_1 = y'(0)$.
As such, $a_0$ and $a_1$ are arbitrary constants that depend on the initial conditions of the problem. Two linearly independent solutions can be obtained by choosing $(a_0, a_1) = (1, 0)$ and $(a_0, a_1) = (0, 1)$:
\[
y_1 = 1 + \frac{x^3}{6} + \frac{x^6}{180} + \cdots, \qquad
y_2 = x + \frac{x^4}{12} + \frac{x^7}{504} + \cdots.
\]
Suitable linear combinations of these give the two standard solutions of the Airy equation, known as the Airy functions of the first and second kind, denoted by $\mathrm{Ai}(x)$ and $\mathrm{Bi}(x)$, respectively.
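The recurrence can be checked numerically against scipy's Airy function: choosing $a_0 = \mathrm{Ai}(0)$ and $a_1 = \mathrm{Ai}'(0)$ should reproduce $\mathrm{Ai}(x)$ for small $x$. A sketch:

```python
import numpy as np
from scipy.special import airy, gamma

# Known values Ai(0) and Ai'(0) pick out the Airy function of the first kind
a = np.zeros(60)
a[0] = 1.0/(3.0**(2.0/3.0)*gamma(2.0/3.0))    # Ai(0)
a[1] = -1.0/(3.0**(1.0/3.0)*gamma(1.0/3.0))   # Ai'(0)

# a_2 = 0 already; fill the rest via a_{n+3} = a_n / ((n+3)(n+2))
for n in range(len(a) - 3):
    a[n + 3] = a[n]/((n + 3.0)*(n + 2.0))

x0 = 0.5
series = sum(a[n]*x0**n for n in range(len(a)))
print(series, airy(x0)[0])   # the two values agree closely
```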
This method is used to find power-series solutions about regular singular points.
We again consider the case where $x_0 = 0$ for simplicity; the method can easily be generalized to other points. Essentially, we assume a solution allowing non-integer powers to account for the singularity:
\[
y = x^{\sigma} \sum_{n=0}^{\infty} a_n x^n = \sum_{n=0}^{\infty} a_n x^{n+\sigma},
\]
where $\sigma$ is a constant to be determined, and the $a_n$ are coefficients to be determined (with $a_0 \neq 0$).
Its derivatives are
\[
y' = \sum_{n=0}^{\infty} (n+\sigma)\, a_n x^{n+\sigma-1}, \qquad
y'' = \sum_{n=0}^{\infty} (n+\sigma)(n+\sigma-1)\, a_n x^{n+\sigma-2}.
\]
Plugging these into the ODE will yield a quadratic equation in $\sigma$, known as the indicial equation, which looks like
\[
\sigma(\sigma - 1) + p_0 \sigma + q_0 = 0,
\]
where $p_0$ and $q_0$ are the coefficients of the lowest-order terms in the power series expansions of $x\, p(x)$ and $x^2 q(x)$ about $x = 0$ (i.e., $p_0 = \lim_{x \to 0} x\, p(x)$ and $q_0 = \lim_{x \to 0} x^2 q(x)$). This, incidentally, is why we require $x = 0$ to be a regular singular point. If the point were not $x = 0$, we would shift the variable accordingly, with $p_0 = \lim_{x \to x_0} (x - x_0)\, p(x)$ and $q_0 = \lim_{x \to x_0} (x - x_0)^2\, q(x)$.
The nature of the roots $\sigma_1$ and $\sigma_2$ of the indicial equation determines the form of the general solution to the ODE:
Distinct roots not differing by an integer ($\sigma_1 - \sigma_2 \notin \mathbb{Z}$): two linearly independent solutions of the form
\[
y_1 = x^{\sigma_1} \sum_{n=0}^{\infty} a_n x^n, \qquad
y_2 = x^{\sigma_2} \sum_{n=0}^{\infty} b_n x^n.
\]
Repeated root ($\sigma_1 = \sigma_2 = \sigma$): one solution uses $\sigma$, the other involves a logarithmic term:
\[
y_1 = x^{\sigma} \sum_{n=0}^{\infty} a_n x^n, \qquad
y_2 = y_1 \ln x + x^{\sigma} \sum_{n=0}^{\infty} b_n x^n.
\]
Roots differing by a positive integer ($\sigma_1 - \sigma_2 \in \mathbb{Z}^{+}$): one solution uses the larger root $\sigma_1$; the other is more complicated and may involve logarithmic terms depending on the specific equation.
Then, by plugging the assumed solution into the ODE and equating coefficients of like powers, we can derive recurrence relations for the coefficients $a_n$. To find the coefficients $b_n$ in the logarithmic cases, we may need to differentiate the Frobenius solution with respect to $\sigma$ and evaluate it at the relevant root.
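As an illustration, for Bessel's equation of order $\nu$ in standard form, $p(x) = 1/x$ and $q(x) = 1 - \nu^2/x^2$, so $p_0 = 1$ and $q_0 = -\nu^2$; the indicial roots then follow with sympy:

```python
import sympy as sp

sigma, nu = sp.symbols('sigma nu')

# Indicial equation sigma*(sigma - 1) + p0*sigma + q0 = 0
# with p0 = 1 and q0 = -nu**2 (Bessel's equation of order nu)
indicial = sigma*(sigma - 1) + 1*sigma - nu**2
roots = sp.solve(indicial, sigma)
print(roots)   # sigma = +nu and -nu
```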